
Conversation

@xin3he (Contributor) commented Nov 19, 2025

PR Type

Enhancement


Description

  • Added tuning result table printing

  • Updated max_trials based on available configurations


Diagram Walkthrough

flowchart LR
  A["Add _print_trial_results_table method"] -- "Prints formatted table" --> B["add_trial_result method"]
  C["Update max_trials in init_tuning"] -- "Based on available configs" --> D["init_tuning function"]

File Walkthrough

Relevant files

Enhancement
  base_tuning.py (neural_compressor/common/base_tuning.py), +48/-1
  Enhance tuning results and configuration handling
    • Added _print_trial_results_table method for formatted table output
    • Updated add_trial_result to call _print_trial_results_table
    • Modified init_tuning to adjust max_trials based on available configurations

Signed-off-by: He, Xin3 <[email protected]>
@PRAgent4INC (Collaborator) commented:

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Possible Issue

The baseline check in _print_trial_results_table could lead to incorrect calculations if baseline is None. Ensure that all calculations handle None values correctly.

baseline_val = self.baseline if self.baseline is not None else 0.0
baseline_str = f"{baseline_val:.4f}" if self.baseline is not None else "N/A"
target_threshold_str = (
    f"{baseline_val * (1 - self.tuning_config.tolerable_loss):.4f}" if self.baseline is not None else "N/A"
)

# Calculate relative loss if baseline is available
relative_loss_val = 0.0
relative_loss_str = "N/A"
if self.baseline is not None:
    relative_loss_val = (baseline_val - trial_result) / baseline_val
    relative_loss_str = f"{relative_loss_val*100:.2f}%"

# Get best result so far
best_result = max(record.trial_result for record in self.tuning_history)

# Status indicator with emoji
if self.baseline is not None and trial_result >= (baseline_val * (1 - self.tuning_config.tolerable_loss)):
    status = "✅ PASSED"
else:
    status = "❌ FAILED"

# Prepare data for Statistics table with combined fields
field_names = ["📊 Metric", "📈 Value"]
output_data = [
    ["Trial / Progress", f"{len(self.tuning_history)}/{self.tuning_config.max_trials}"],
    ["Baseline / Target", f"{baseline_str} / {target_threshold_str}"],
    ["Current / Status", f"{trial_result:.4f} | {status}"],
    ["Best / Relative Loss", f"{best_result:.4f} / {relative_loss_str}"],
]

# Use Statistics class to print the table
Statistics(
    output_data, header=f"🎯 Auto-Tune Trial #{trial_index} Results", field_names=field_names
).print_stat()
Performance Concern

Printing a table after every trial could impact performance, especially for large numbers of trials. Consider adding a flag to enable/disable this feature or batching the prints.

    self._print_trial_results_table(trial_index, trial_result)

def _print_trial_results_table(self, trial_index: int, trial_result: Union[int, float]) -> None:
    """Print trial results in a formatted table using Statistics class."""
    baseline_val = self.baseline if self.baseline is not None else 0.0
    baseline_str = f"{baseline_val:.4f}" if self.baseline is not None else "N/A"
    target_threshold_str = (
        f"{baseline_val * (1 - self.tuning_config.tolerable_loss):.4f}" if self.baseline is not None else "N/A"
    )

    # Calculate relative loss if baseline is available
    relative_loss_val = 0.0
    relative_loss_str = "N/A"
    if self.baseline is not None:
        relative_loss_val = (baseline_val - trial_result) / baseline_val
        relative_loss_str = f"{relative_loss_val*100:.2f}%"

    # Get best result so far
    best_result = max(record.trial_result for record in self.tuning_history)

    # Status indicator with emoji
    if self.baseline is not None and trial_result >= (baseline_val * (1 - self.tuning_config.tolerable_loss)):
        status = "✅ PASSED"
    else:
        status = "❌ FAILED"

    # Prepare data for Statistics table with combined fields
    field_names = ["📊 Metric", "📈 Value"]
    output_data = [
        ["Trial / Progress", f"{len(self.tuning_history)}/{self.tuning_config.max_trials}"],
        ["Baseline / Target", f"{baseline_str} / {target_threshold_str}"],
        ["Current / Status", f"{trial_result:.4f} | {status}"],
        ["Best / Relative Loss", f"{best_result:.4f} / {relative_loss_str}"],
    ]

    # Use Statistics class to print the table
    Statistics(
        output_data, header=f"🎯 Auto-Tune Trial #{trial_index} Results", field_names=field_names
    ).print_stat()
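
One way to address this concern is to gate the call behind a constructor flag. The sketch below is illustrative only: the `print_trial_table` flag and the minimal `Tuner` shell are hypothetical names, not part of the PR.

```python
# Hypothetical sketch: gate per-trial table printing behind a flag so that
# long tuning runs are not slowed by console output. The flag name and this
# minimal Tuner shell are illustrative, not part of the PR.

class Tuner:
    def __init__(self, print_trial_table: bool = True):
        self.print_trial_table = print_trial_table
        self.tables_printed = 0  # stands in for actual Statistics output

    def _print_trial_results_table(self, trial_index, trial_result):
        # Real code would build output_data and call Statistics(...).print_stat()
        self.tables_printed += 1

    def add_trial_result(self, trial_index, trial_result):
        # ... record the trial result ...
        if self.print_trial_table:
            self._print_trial_results_table(trial_index, trial_result)


quiet = Tuner(print_trial_table=False)
quiet.add_trial_result(1, 0.75)
print(quiet.tables_printed)  # 0
```

Batching (for example, printing only every N trials) would be an alternative with the same structure.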

@PRAgent4INC (Collaborator) commented:

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
Possible issue
Handle empty history

Handle the case where tuning_history might be empty to avoid errors.

neural_compressor/common/base_tuning.py [445]

-best_result = max(record.trial_result for record in self.tuning_history)
+best_result = max((record.trial_result for record in self.tuning_history), default=float('-inf'))
Suggestion importance[1-10]: 8


Why: Handles the edge case where tuning_history might be empty, preventing potential errors. This is a significant improvement.

Medium
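
For illustration, this is the built-in behavior of `max` the suggestion relies on (plain Python, independent of the PR code):

```python
# max() raises ValueError on an empty iterable unless `default` is supplied.
empty = []
try:
    max(record for record in empty)
    raised = False
except ValueError:
    raised = True  # this is the failure the suggestion guards against
print(raised)  # True

best = max((record for record in empty), default=float("-inf"))
print(best)  # -inf
```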
Prevent division by zero

Ensure that baseline_val is not zero to avoid division by zero errors.

neural_compressor/common/base_tuning.py [441-444]

 relative_loss_val = 0.0
 relative_loss_str = "N/A"
-if self.baseline is not None:
+if self.baseline is not None and baseline_val != 0:
     relative_loss_val = (baseline_val - trial_result) / baseline_val
     relative_loss_str = f"{relative_loss_val*100:.2f}%"
Suggestion importance[1-10]: 7


Why: Ensures that division by zero is avoided, which is important for robustness. However, this is a minor improvement.

Medium
General
Improve table formatting

Ensure consistent formatting and alignment in the table.

neural_compressor/common/base_tuning.py [454-460]

-field_names = ["📊 Metric", "📈 Value"]
+field_names = ["Metric", "Value"]
 output_data = [
     ["Trial / Progress", f"{len(self.tuning_history)}/{self.tuning_config.max_trials}"],
     ["Baseline / Target", f"{baseline_str} / {target_threshold_str}"],
     ["Current / Status", f"{trial_result:.4f} | {status}"],
     ["Best / Relative Loss", f"{best_result:.4f} / {relative_loss_str}"],
 ]
Suggestion importance[1-10]: 5


Why: Improves the formatting and alignment of the table, enhancing readability. This is a minor improvement.

Low

@xin3he (Contributor, Author) commented Nov 19, 2025

[two images attached]

@xin3he xin3he requested a review from chensuyue November 19, 2025 05:55
actual_config_count = len(config_loader.config_set)
if tuning_config.max_trials > actual_config_count:
    tuning_config.max_trials = actual_config_count
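
The same clamp can be written with `min`, avoiding the branch. A small self-contained sketch (the `TuningConfig` shell below is illustrative, standing in for the real class):

```python
# Equivalent one-line clamp: never schedule more trials than there are configs.
# The TuningConfig shell here is a stand-in for the real class.

class TuningConfig:
    def __init__(self, max_trials: int = 100):
        self.max_trials = max_trials


tuning_config = TuningConfig(max_trials=100)
actual_config_count = 8  # e.g. len(config_loader.config_set)
tuning_config.max_trials = min(tuning_config.max_trials, actual_config_count)
print(tuning_config.max_trials)  # 8
```

Note that `min` also leaves `max_trials` untouched when the user asked for fewer trials than there are configurations, matching the original `if` branch.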

A Contributor reviewed the snippet above and commented:
Please raise a message to let the user be aware of that change, others LGTM. Thanks!

@xin3he (Contributor, Author) replied:
max_trials=100 is the default, which means this message would appear in most cases. To avoid distracting users, I recommend not printing this information.

